Stochastic Convex Optimization with Multiple Objectives
Abstract
In this paper, we are interested in the development of efficient algorithms for convex optimization problems in the simultaneous presence of multiple objectives and stochasticity in the first-order information. We cast the stochastic multiple-objective optimization problem as a constrained optimization problem by choosing one function as the objective and bounding the other objectives by appropriate thresholds. We first examine a two-stage exploration-exploitation algorithm that approximates the stochastic objectives by sampling and then solves a constrained stochastic optimization problem with the projected gradient method. This method attains a suboptimal convergence rate even under strong assumptions on the objectives. Our second approach is an efficient primal-dual stochastic algorithm. It leverages the theory of Lagrangian methods in constrained optimization and attains the optimal convergence rate of O(1/√T) in high probability for general Lipschitz continuous objectives.
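To make the constrained formulation concrete, the following is a minimal Python sketch of a primal-dual stochastic update for min f_0(x) subject to f_i(x) ≤ γ_i: the primal variable takes a projected stochastic gradient step on the Lagrangian, while the dual multipliers ascend on the observed constraint violations. The oracle callables (grad_f0, grads_f, vals_f, project_x), the step size, and the iterate averaging are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def primal_dual_sgd(grad_f0, grads_f, vals_f, thresholds, project_x, x0, T):
    """Hypothetical sketch of a primal-dual stochastic method for
        min f_0(x)  s.t.  f_i(x) <= gamma_i,  i = 1..m,
    given noisy first-order oracles for each objective."""
    m = len(thresholds)
    x, lam = x0.astype(float).copy(), np.zeros(m)
    eta = 1.0 / np.sqrt(T)                     # O(1/sqrt(T))-style step size (illustrative)
    x_avg = np.zeros_like(x, dtype=float)
    for _ in range(T):
        # Stochastic gradient of the Lagrangian in x:
        # grad f_0(x) + sum_i lam_i * grad f_i(x), all from noisy oracles.
        gx = grad_f0(x) + sum(lam[i] * grads_f[i](x) for i in range(m))
        # Stochastic gradient in lambda: the constraint violations f_i(x) - gamma_i.
        glam = np.array([vals_f[i](x) - thresholds[i] for i in range(m)])
        x = project_x(x - eta * gx)            # primal descent + projection onto the domain
        lam = np.maximum(lam + eta * glam, 0)  # dual ascent, keeping lambda >= 0
        x_avg += x / T                         # return the averaged iterate
    return x_avg
```

The averaged iterate and the 1/√T step size mirror standard analyses of stochastic primal-dual methods; for a high-probability guarantee of the kind stated in the abstract, the dual variables would typically also be kept in a bounded set, which this sketch omits.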
Similar papers
A multiple objective approach for joint ordering and pricing planning problem with stochastic lead times
The integration of marketing and demand with logistics and inventories (supply side of companies) may cause multiple improvements; it can revolutionize the management of the revenue of rental companies, hotels, and airlines. In this paper, we develop a multi-objective pricing-inventory model for a retailer. Maximizing the retailer's profit and the service level are the objectives, and shorta...
Variance Reduction for Faster Non-Convex Optimization
We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, in the long history of this basic problem, the only known theoretical results on first-order non-convex optimization remain to be full gradient descent that converges in O(1/ε) iterations for smooth objectives, and stochastic gradient descent that converges ...
DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
This paper considers convex optimization problems where nodes of a network have access to summands of a global objective. Each of these local objectives is further assumed to be an average of a finite set of functions. The motivation for this setup is to solve large scale machine learning problems where elements of the training set are distributed to multiple computational elements. The decentr...
Optimal Solutions for Adaptive Search Problems with Entropy Objectives
The problem of searching for an unknown object occurs in important applications in areas such as security, medicine, and defense. Sensors with the capability to process information rapidly require adaptive algorithms to control their search in response to noisy observations. In this paper, we discuss classes of dynamic, adaptive search problems, and formulate the resulting sensor control problems as...
Dynamic Stochastic Approximation for Multi-stage Stochastic Optimization
In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic approximation (DSA) algorithm, for solving these types of stochastic optimization problems. We show that DSA can achieve an optimal O(1/ε⁴) rate of convergence in terms of the total numbe...